Lesion Detection


Discovering Pathology Rationale and Token Allocation for Efficient Multimodal Pathology Reasoning

Xu, Zhe, Jin, Cheng, Wang, Yihui, Liu, Ziyi, Chen, Hao

arXiv.org Artificial Intelligence

Multimodal pathological image understanding has garnered widespread interest due to its potential to improve diagnostic accuracy and enable personalized treatment through integrated visual and textual data. However, existing methods exhibit limited reasoning capabilities, which hampers their ability to handle complex diagnostic scenarios. Additionally, the enormous size of pathological images leads to severe computational burdens, further restricting their practical deployment. To address these limitations, we introduce a novel bilateral reinforcement learning framework comprising two synergistic branches. One branch enhances reasoning capability by enabling the model to learn task-specific decision processes, i.e., pathology rationales, directly from labels without explicit reasoning supervision, while the other branch dynamically allocates a tailored number of tokens to each image based on both its visual content and the task context, thereby optimizing computational efficiency. We apply our method to various pathological tasks such as visual question answering, cancer subtyping, and lesion detection. Extensive experiments show an average +41.7 absolute performance improvement with 70.3% lower inference costs over the base models, achieving both reasoning accuracy and computational efficiency.
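The token-allocation branch can be pictured with a minimal sketch. This is not the paper's learned policy (which is trained with reinforcement learning); here a random per-patch relevance score stands in for the policy's output, and the budget fraction and minimum-token floor are illustrative assumptions.

```python
import numpy as np

def allocate_tokens(patch_scores, budget_frac=0.3, min_tokens=16):
    """Keep only the most relevant patch tokens for an image.

    `patch_scores` stands in for whatever a learned allocation policy
    would output; the RL training itself is not reproduced here.
    """
    n = len(patch_scores)
    k = max(min_tokens, int(budget_frac * n))   # tailored token budget
    keep = np.argsort(patch_scores)[::-1][:k]   # top-k most relevant patches
    return np.sort(keep)                        # restore spatial order

rng = np.random.default_rng(0)
scores = rng.random(1024)                       # e.g. a 32x32 patch grid
kept = allocate_tokens(scores)
saving = 1 - len(kept) / len(scores)            # fraction of tokens dropped
```

Dropping ~70% of the tokens in this toy setting mirrors the order of inference savings the abstract reports, though the real method conditions the budget on image content and task context rather than a fixed fraction.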


The iToBoS dataset: skin region images extracted from 3D total body photographs for lesion detection

Saha, Anup, Adeola, Joseph, Ferrera, Nuria, Mothershaw, Adam, Rezze, Gisele, Gaborit, Séraphin, D'Alessandro, Brian, Hudson, James, Szabó, Gyula, Pataki, Balazs, Rajani, Hayat, Nazari, Sana, Hayat, Hassan, Primiero, Clare, Soyer, H. Peter, Malvehy, Josep, Garcia, Rafael

arXiv.org Artificial Intelligence

Artificial intelligence has significantly advanced skin cancer diagnosis by enabling rapid and accurate detection of malignant lesions. In this domain, most publicly available image datasets consist of single, isolated skin lesions positioned at the center of the image. While these lesion-centric datasets have been fundamental for developing diagnostic algorithms, they lack the context of the surrounding skin, which is critical for improving lesion detection. The iToBoS dataset was created to address this challenge. It includes 16,954 images of skin regions from 100 participants, captured using 3D total body photography. Each image roughly corresponds to a 7 × 9 cm section of skin with all suspicious lesions annotated using bounding boxes. Additionally, the dataset provides metadata such as anatomical location, age group, and sun damage score for each image. This dataset aims to facilitate training and benchmarking of algorithms, with the goal of enabling early detection of skin cancer and deployment of this technology in non-clinical environments.


Sine Wave Normalization for Deep Learning-Based Tumor Segmentation in CT/PET Imaging

Ren, Jintao, Li, Muheng, Korreman, Stine Sofia

arXiv.org Artificial Intelligence

This report presents a normalization block for automated tumor segmentation in CT/PET scans, developed for the autoPET III Challenge. The key innovation is the introduction of SineNormal, which applies periodic sine transformations to PET data to enhance lesion detection. By highlighting intensity variations and producing concentric ring patterns in PET-highlighted regions, the model aims to improve segmentation accuracy, particularly for challenging multitracer PET datasets. The code for this project is available on GitHub.
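The core idea of a periodic sine normalization can be sketched in a few lines. This is an illustrative reconstruction, not the challenge submission's actual code: the period and min-max scaling are assumptions, and the toy "PET slice" is a synthetic Gaussian hot spot.

```python
import numpy as np

def sine_normal(pet, period=0.25):
    """Sketch of a SineNormal-style transform: map normalized PET
    intensities through a periodic sine so a smooth uptake gradient
    becomes concentric rings around high-uptake lesions."""
    lo, hi = pet.min(), pet.max()
    norm = (pet - lo) / (hi - lo + 1e-8)      # scale intensities to [0, 1]
    return np.sin(2 * np.pi * norm / period)  # periodic re-encoding

# toy PET slice: a single radial uptake hot spot
yy, xx = np.mgrid[-32:32, -32:32]
pet = np.exp(-(xx**2 + yy**2) / (2 * 10.0**2))
rings = sine_normal(pet)                      # concentric ring pattern
```

Because the sine wraps several times over the intensity range, each lesion's monotonic uptake profile is turned into alternating bands, which is the "concentric ring" effect the abstract describes.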


Learning a Clinically-Relevant Concept Bottleneck for Lesion Detection in Breast Ultrasound

Bunnell, Arianna, Glaser, Yannik, Valdez, Dustin, Wolfgruber, Thomas, Altamirano, Aleen, González, Carol Zamora, Hernandez, Brenda Y., Sadowski, Peter, Shepherd, John A.

arXiv.org Artificial Intelligence

Detecting and classifying lesions in breast ultrasound images is a promising application of artificial intelligence (AI) for reducing the burden of cancer in regions with limited access to mammography. Such AI systems are more likely to be useful in a clinical setting if their predictions can be explained to a radiologist. This work proposes an explainable AI model that provides interpretable predictions using a standard lexicon from the American College of Radiology's Breast Imaging Reporting and Data System (BI-RADS). The model is a deep neural network featuring a concept bottleneck layer in which known BI-RADS features are predicted before making a final cancer classification. This enables radiologists to easily review the predictions of the AI system and potentially fix errors in real time by modifying the concept predictions. In experiments, a model is developed on 8,854 images from 994 women with expert annotations and histological cancer labels. The model outperforms state-of-the-art lesion detection frameworks with 48.9 average precision on the held-out testing set, and for cancer classification, concept intervention is shown to increase performance from 0.876 to 0.885 area under the receiver operating characteristic curve.
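The concept-bottleneck-with-intervention mechanism can be illustrated with a toy forward pass. This is a schematic sketch, not the paper's trained network: the feature and concept dimensions are arbitrary, and the weights are random stand-ins for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-stage head: image features -> concept scores -> cancer prob.
N_FEAT, N_CONCEPTS = 16, 6          # e.g. BI-RADS-style descriptors
W_concept = rng.normal(size=(N_FEAT, N_CONCEPTS))
w_cls = rng.normal(size=N_CONCEPTS)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict(features, concept_override=None):
    concepts = sigmoid(features @ W_concept)       # interpretable bottleneck
    if concept_override is not None:               # radiologist intervention:
        for idx, val in concept_override.items():  # overwrite a concept score
            concepts[idx] = val
    return concepts, sigmoid(concepts @ w_cls)     # final classification

feats = rng.normal(size=N_FEAT)
concepts, p = predict(feats)                          # model's own prediction
_, p_fixed = predict(feats, concept_override={0: 1.0})  # after intervention
```

The key property is that the final classifier sees only the concept scores, so correcting a mispredicted concept directly changes the cancer probability, which is what enables the intervention gain the abstract reports.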


Discrepancy-based Diffusion Models for Lesion Detection in Brain MRI

Fan, Keqiang, Cai, Xiaohao, Niranjan, Mahesan

arXiv.org Artificial Intelligence

Diffusion probabilistic models (DPMs) have exhibited significant effectiveness in computer vision tasks, particularly in image generation. However, their notable performance heavily relies on labelled datasets, which limits their application in medical images due to the associated high-cost annotations. Current DPM-related methods for lesion detection in medical imaging, which can be categorized into two distinct approaches, primarily rely on image-level annotations. The first approach, based on anomaly detection, involves learning reference healthy brain representations and identifying anomalies based on the difference in inference results. In contrast, the second approach, resembling a segmentation task, employs only the original brain multi-modalities as prior information for generating pixel-level annotations. In this paper, our proposed model - discrepancy distribution medical diffusion (DDMD) - for lesion detection in brain MRI introduces a novel framework by incorporating distinctive discrepancy features, deviating from the conventional direct reliance on image-level annotations or the original brain modalities. In our method, the inconsistency in image-level annotations is translated into distribution discrepancies among heterogeneous samples while preserving information within homogeneous samples. This property retains pixel-wise uncertainty and facilitates an implicit ensemble of segmentation, ultimately enhancing the overall detection performance. Thorough experiments conducted on the BRATS2020 benchmark dataset containing multimodal MRI scans for brain tumour detection demonstrate the strong performance of our approach in comparison to state-of-the-art methods.


Convolutional Neural Networks Towards Facial Skin Lesions Detection

Sarshar, Reza, Heydari, Mohammad, Noughabi, Elham Akhondzadeh

arXiv.org Artificial Intelligence

Facial analysis has emerged as a prominent area of research with diverse applications, including cosmetic surgery programs, the beauty industry, photography, and entertainment. Manipulating patient images often necessitates professional image processing software. This study contributes by providing a model that facilitates the detection of blemishes and skin lesions on facial images through a convolutional neural network and machine learning approach. The proposed method offers advantages such as simple architecture, speed, and suitability for image processing while avoiding the complexities associated with traditional methods. The model comprises four main steps: area selection, scanning the chosen region, lesion diagnosis, and marking the identified lesion. Raw data for this research were collected from a reputable clinic in Tehran specializing in skincare and beauty services. The dataset includes administrative information, clinical data, and facial and profile images. A total of 2,300 patient images were extracted from this raw data. A software tool was developed to crop and label lesions, with input from two treatment experts. In the lesion preparation phase, the selected area was standardized to 50 × 50 pixels. Subsequently, a convolutional neural network model was employed for lesion labeling. The classification model demonstrated high performance, with a specificity of 0.98 for healthy skin and 0.97 for lesioned skin. Internal validation involved performance indicators and cross-validation, while external validation compared the model's performance indicators with those of the transfer learning method using the VGG16 deep network model. Compared to existing studies, the results of this research showcase the efficacy and desirability of the proposed model and methodology.


Decoupled conditional contrastive learning with variable metadata for prostate lesion detection

Ruppli, Camille, Gori, Pietro, Ardon, Roberto, Bloch, Isabelle

arXiv.org Artificial Intelligence

Early diagnosis of prostate cancer is crucial for efficient treatment. Multi-parametric Magnetic Resonance Images (mp-MRI) are widely used for lesion detection. The Prostate Imaging Reporting and Data System (PI-RADS) has standardized interpretation of prostate MRI by defining a score for lesion malignancy. PI-RADS data is readily available from radiology reports but is subject to high inter-report variability. We propose a new contrastive loss function that leverages weak metadata with multiple annotators per sample and takes advantage of inter-report variability by defining metadata confidence. By combining metadata of varying confidence with unannotated data into a single conditional contrastive loss function, we report a 3% AUC increase in lesion detection on the public PI-CAI challenge dataset.
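One way to picture how metadata confidence can enter a contrastive objective is to scale a standard InfoNCE-style term by a per-pair confidence weight. This is a simplified sketch under that assumption, not the paper's exact conditional loss; the confidence value stands in for inter-report agreement on the PI-RADS score.

```python
import numpy as np

def confidence_weighted_nce(z_anchor, z_pos, z_negs, confidence, tau=0.1):
    """Illustrative InfoNCE term scaled by a metadata-confidence weight
    (e.g. 1.0 when all radiology reports agree, lower otherwise)."""
    sim = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(sim(z_anchor, z_pos) / tau)             # positive pair
    negs = sum(np.exp(sim(z_anchor, n) / tau) for n in z_negs)
    return -confidence * np.log(pos / (pos + negs))      # down-weight noisy pairs

rng = np.random.default_rng(0)
z = rng.normal(size=(5, 32))                             # toy embeddings
loss_sure = confidence_weighted_nce(z[0], z[1], z[2:], confidence=1.0)
loss_unsure = confidence_weighted_nce(z[0], z[1], z[2:], confidence=0.2)
```

The effect is that pairs whose metadata labels are contested contribute less gradient, so unreliable report-derived positives cannot dominate the representation.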


Mining Negative Temporal Contexts For False Positive Suppression In Real-Time Ultrasound Lesion Detection

Yu, Haojun, Li, Youcheng, Wu, QuanLin, Zhao, Ziwei, Chen, Dengbo, Wang, Dong, Wang, Liwei

arXiv.org Artificial Intelligence

During ultrasonic scanning processes, real-time lesion detection can assist radiologists in accurate cancer diagnosis. However, this essential task remains challenging and underexplored. General-purpose real-time object detection models can mistakenly report obvious false positives (FPs) when applied to ultrasound videos, potentially misleading junior radiologists. One key issue is their failure to utilize negative symptoms in previous frames, denoted as negative temporal contexts (NTC) [15]. To address this issue, we propose to extract contexts from previous frames, including NTC, with the guidance of inverse optical flow. By aggregating extracted contexts, we endow the model with the ability to suppress FPs by leveraging NTC. We call the resulting model UltraDet. The proposed UltraDet demonstrates significant improvement over previous state-of-the-art methods and achieves real-time inference speed.


Lesion Detection on Leaves using Class Activation Maps

Uysal, Enes Sadi, Sen, Deniz, Ornek, Ahmet Haydar, Yetkin, Ahmet Emin

arXiv.org Artificial Intelligence

Lesion detection on plant leaves is a critical task in plant pathology and agricultural research. Identifying lesions enables assessing the severity of plant diseases and making informed decisions regarding disease control measures and treatment strategies. Prior studies have applied well-known object detectors to this task; however, training object detectors to detect small objects such as lesions can be problematic. In this study, we propose a method for lesion detection on plant leaves utilizing class activation maps generated by a ResNet-18 classifier. In the test set, we achieved a 0.45 success rate in predicting the locations of lesions in leaves. Our study presents a novel approach for lesion detection on plant leaves by utilizing CAMs generated by a ResNet classifier while eliminating the need for a lesion annotation process.
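The standard class activation map computation behind this kind of annotation-free localization can be sketched as follows. This is a generic CAM (weighting the last convolutional feature maps by the classifier weights of the target class), not the paper's code; the feature-map shapes and random weights are stand-ins for a trained ResNet-18.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, cls):
    """Generic CAM: weight the final conv feature maps by the fully
    connected classifier weights of class `cls`, sum over channels,
    and rescale the result to [0, 1]."""
    cam = np.tensordot(fc_weights[cls], feature_maps, axes=1)  # (H, W)
    cam -= cam.min()
    return cam / (cam.max() + 1e-8)

rng = np.random.default_rng(1)
fmaps = rng.random((512, 7, 7))   # ResNet-18 last conv block: 512 x 7 x 7
weights = rng.random((2, 512))    # binary healthy / lesioned classifier head
cam = class_activation_map(fmaps, weights, cls=1)
lesion_mask = cam > 0.5           # threshold to propose lesion regions
```

In practice the coarse 7 × 7 map would be upsampled to the input resolution before thresholding; the point is that lesion locations fall out of a classifier trained only with image-level labels.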


BRAIxDet: Learning to Detect Malignant Breast Lesion with Incomplete Annotations

Chen, Yuanhong, Liu, Yuyuan, Wang, Chong, Elliott, Michael, Kwok, Chun Fung, Pena-Solorzano, Carlos, Tian, Yu, Liu, Fengbei, Frazer, Helen, McCarthy, Davis J., Carneiro, Gustavo

arXiv.org Artificial Intelligence

Methods to detect malignant lesions from screening mammograms are usually trained with fully annotated datasets, where images are labelled with the localisation and classification of cancerous lesions. However, real-world screening mammogram datasets commonly have a subset that is fully annotated and another subset that is weakly annotated with just the global classification (i.e., without lesion localisation). Given the large size of such datasets, researchers usually face a dilemma with the weakly annotated subset: to not use it or to fully annotate it. The first option will reduce detection accuracy because it does not use the whole dataset, and the second option is too expensive given that the annotation needs to be done by expert radiologists. In this paper, we propose a middle-ground solution for the dilemma, which is to formulate the training as a weakly- and semi-supervised learning problem that we refer to as malignant breast lesion detection with incomplete annotations. To address this problem, our new method comprises two stages, namely: 1) pre-training a multi-view mammogram classifier with weak supervision from the whole dataset, and 2) extending the trained classifier to become a multi-view detector that is trained with semi-supervised student-teacher learning, where the training set contains fully and weakly-annotated mammograms. We provide extensive detection results on two real-world screening mammogram datasets containing incomplete annotations, and show that our proposed approach achieves state-of-the-art results in the detection of malignant breast lesions with incomplete annotations.